13 research outputs found

    Interactive dynamic objects in a virtual light field

    This report builds on existing work on Virtual Light Fields (VLF). A previous VLF implementation allows interactive walkthrough of a static, globally illuminated scene on a modern desktop computer. This report outlines enhancements to that implementation which allow movable geometry to be added to existing VLF solutions. Two diffuse shading modes are implemented for the dynamic geometry: a fast simple mode, which approximates the emitters in the VLF using OpenGL light sources, and a slower advanced mode, which approximates the diffuse inter-reflection and soft shadows received by the dynamic geometry using information from the VLF. In both modes the dynamic geometry casts hard shadows onto the existing diffuse geometry in the scene. Both modes achieve interactive rates on a high-specification modern desktop computer, although the advanced mode is limited to simple dynamic objects because of its expensive diffuse gathering step. Potential optimisations are discussed.
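    The "simple mode" described above treats each VLF emitter as an OpenGL-style point light and shades the dynamic geometry with a Lambertian term. A minimal sketch of that idea follows; the function name and the (position, intensity) emitter representation are assumptions for illustration, not part of the report.

```python
import numpy as np

def shade_diffuse(point, normal, albedo, emitters):
    """Accumulate Lambertian shading at a surface point from a small
    set of point-light approximations of the VLF emitters."""
    normal = normal / np.linalg.norm(normal)
    radiance = np.zeros(3)
    for pos, intensity in emitters:
        to_light = np.asarray(pos, dtype=float) - point
        dist = np.linalg.norm(to_light)
        # Clamp back-facing contributions; inverse-square falloff,
        # as for an OpenGL-style point light.
        n_dot_l = max(np.dot(normal, to_light / dist), 0.0)
        radiance += albedo * intensity * n_dot_l / dist ** 2
    return radiance
```

    Restricting the loop to the few brightest emitters is what keeps this mode fast enough for interactive rates.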

    Developable Surfaces from Arbitrary Sketched Boundaries

    Developable surfaces are surfaces that can be unfolded into the plane without distortion. Although ubiquitous in our everyday surroundings, modeling them with existing tools requires significant geometric expertise and time. Our paper simplifies the modeling process by introducing an intuitive sketch-based approach for modeling developables. We develop an algorithm that, given an arbitrary user-specified 3D polyline boundary constructed with a sketching interface, generates a smooth discrete developable surface interpolating that boundary. Our method exploits the connection between developable surfaces and the convex hulls of their boundaries, exploring the space of possible interpolating surfaces in search of a developable surface with desirable shape characteristics such as fairness and predictability. The algorithm is not restricted to any particular subset of developable surfaces. We demonstrate the effectiveness of our method through a series of examples, from architectural design to garments.
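    A discrete surface is developable exactly when its interior vertices are intrinsically flat, i.e. the incident triangle angles sum to 2π. A small sketch of that test (an illustrative helper, not the paper's algorithm) follows.

```python
import math

def angle_at(v, a, b):
    """Interior angle at vertex v in triangle (v, a, b), for 3D points."""
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def dot(p, q): return sum(pi * qi for pi, qi in zip(p, q))
    u, w = sub(a, v), sub(b, v)
    return math.acos(dot(u, w) / math.sqrt(dot(u, u) * dot(w, w)))

def angle_defect(vertex, ring):
    """Discrete Gaussian curvature at an interior vertex: 2*pi minus the
    sum of incident triangle angles. Zero defect <=> locally developable."""
    total = sum(angle_at(vertex, ring[i], ring[(i + 1) % len(ring)])
                for i in range(len(ring)))
    return 2.0 * math.pi - total
```

    A mesh interpolating a sketched boundary can be scored by how far these defects are from zero, which is one way to make "search for a developable surface" concrete.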

    Virtual Garments: A Fully Geometric Approach for Clothing Design

    Modeling dressed characters is known to be a very tedious process. It usually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then running a physically based simulation that accounts for gravity and collisions to compute the rest shape of the garment, with the adequate folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start with a 2D sketching system in which the user draws the contours and seam lines of the garment directly on a virtual mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a precomputed distance field around the mannequin, and splits the created surface into panels delimited by the seam lines. The generated panels are typically not developable, yet the panels of a realistic garment must be, since each panel must unfold into a 2D sewing pattern. Our system therefore automatically approximates each panel with a developable surface while keeping the panels assembled along the seams, which allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D garment, including the folds due to collisions with the body and gravity; the folds are generated by procedural modeling of the buckling phenomena observed in real fabric. The result of our algorithm is a realistic-looking 3D mannequin dressed in the designed garment, together with 2D patterns that can be used for distortion-free texture mapping. The patterns we create also allow us to sew real replicas of the virtual garments.
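    The reason a developable panel unfolds into a distortion-free sewing pattern is that its triangles can be laid flat one at a time while preserving every edge length. The core step can be sketched as below (an illustrative helper, not the paper's implementation).

```python
import math

def unfold_triangle(p0, p1, p2):
    """Lay a 3D triangle flat in the plane, preserving all three edge
    lengths. Repeating this across the shared edges of a developable
    panel yields its 2D sewing pattern with no distortion."""
    a = math.dist(p0, p1)   # edge p0-p1
    b = math.dist(p0, p2)   # edge p0-p2
    c = math.dist(p1, p2)   # edge p1-p2
    # Place p0 at the origin and p1 on the x-axis; locate p2 via the
    # law of cosines.
    x = (a * a + b * b - c * c) / (2.0 * a)
    y = math.sqrt(max(b * b - x * x, 0.0))
    return (0.0, 0.0), (a, 0.0), (x, y)
```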

    Sketching and annotation for the procedural modelling of complex phenomena

    This thesis explores the use of sketching and annotation for the 3D modelling of specific complex phenomena (such as hair, trees, clothes, etc.). We introduce a methodology based on three core ideas. First, that prior knowledge of the nature of the object being modelled can be used to help interpret the sketch. Second, that the workflow should match that of a traditional artist, with rough global features being sketched before detailed local ones. Third, that sketching can often be seen as annotating an underlying structure, and that this structure can be inferred from silhouettes, reducing the amount of input required from the user. We illustrate the methodology with four example interfaces: for designing clothing using various levels of prior knowledge, for modelling hairstyles, and for modelling clouds and trees.

    Dressing and hair-styling virtual characters from a sketch

    This chapter presents a different use of sketch-based modeling, namely the modeling of complex objects from a single sketch, illustrated by the examples of dressing and hairstyling a virtual character. Knowing the nature of the object being modeled eases the extraction of information from a sketch, so that a single sketch depicting a front view (optionally with a second one from the back or side) is sufficient to specify these complex 3D shapes. We show how different levels of prior knowledge about the object being modeled, from basic rules of thumb to more intricate geometric or physically based properties, can be used to interpret the sketch strokes and to infer the missing 3D information, leading to different degrees of visual realism. In addition to discussing practical solutions for the sketch-based modeling of garments and hair that save several orders of magnitude of user time compared to standard 3D modeling methods, this chapter provides the basis of a general methodology for designing sketch-based interfaces for complex models.

    Realistic hair from a sketch

    Due to the number of individual strands needed to create a full head of hair, all the methods available so far model hair by positioning a few hundred guide strands, from which extra strands are generated at the rendering stage using interpolation, wisp-based models or a combination of both [3]. Both geometric and physically based methods have been used in the past for shaping guide strands, and a number of dedicated interactive modeling systems have been proposed for hair design [5, 12, 27]. Most of them provide the user with precise control of the length, position and curliness of hair, but require a large number of successive manipulations, from the delimitation of the scalp to the positioning and shaping of guide strands. This can lead to several hours for the creation of a single head of hair.
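    The rendering-stage interpolation mentioned above amounts to blending a few nearby guide strands, sampled at matching parameter values, into each extra strand. A minimal sketch under that assumption (the function name and strand layout are illustrative):

```python
import numpy as np

def interpolate_strands(guides, weights):
    """Blend guide strands (each an (n, 3) array of points sampled at
    matching parameter values) into one new strand as a convex
    combination of the guides."""
    guides = [np.asarray(g, dtype=float) for g in guides]
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalise to a convex combination
    return sum(w * g for w, g in zip(weights, guides))
```

    Generating thousands of rendered strands this way from a few hundred guides is what makes interactive hairstyle design tractable.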

    Rapid sketch modeling of clouds

    Clouds are an important visual element of any natural scene, and computer artists often wish to create specific cloud shapes (for example in the film Amélie, as depicted in Fig. 1). We describe a sketch-based interface for modeling cumulus clouds. The interface allows rapid construction of a 3D cloud surface representation (a mesh) built on an underlying point-based implicit surface representation. This mesh is rendered using the technique of [BNM*08], resulting in a real-time cloud modeling and rendering system.
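    A common way to realise a point-based implicit surface is to sum a radial falloff (here a Gaussian, an assumption for illustration) around each skeleton point and take an iso-contour of the resulting field; nearby blobs then merge into the puffy unions characteristic of cumulus shapes.

```python
import math

def field(p, centers, radius=1.0):
    """Scalar field from a sum of Gaussian blobs around skeleton points.
    The cloud surface is the iso-contour where field(p) == iso."""
    return sum(math.exp(-sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                        / (radius ** 2))
               for c in centers)

def inside(p, centers, radius=1.0, iso=1.0):
    """True if point p lies inside the blobby cloud volume."""
    return field(p, centers, radius) >= iso
```

    A mesh for rendering would then be extracted from this field with a standard iso-surfacing method such as marching cubes.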

    Structure from silhouettes: a new paradigm for fast sketch-based design of trees

    Special issue: Eurographics 2009. Modeling natural elements such as trees in a plausible way, while offering simple and rapid user control, is a challenge. This paper presents a method based on a new structure-from-silhouettes paradigm. We claim that sketching the silhouettes of foliage at multiple scales is quicker and more intuitive for a user than sketching each branch of a tree. This choice allows us to incorporate botanical knowledge, enabling us to infer branches that connect in a plausible way to their parent branch and have a correct distribution in 3D. We illustrate these ideas with a seamless sketch-based interface used for sketching foliage silhouettes from the scale of an entire tree down to the scale of a leaf. Each sketch serves to infer both the branches at that level and construction lines that support sub-silhouette refinement. When the user finally zooms out, the style inferred for the branching systems they have refined (in terms of branch density, angle, length distribution and shape) is duplicated to the unspecified branching systems at the same level. Meanwhile, botanical knowledge is again used to extend the branch distribution to 3D, resulting in a full, plausible 3D tree that fits the user-sketched contours. As our results show, this system can be of interest to both experts and novice users: while experts can fully specify all parts of a tree and over-sketch specific branches if required, any user can design a basic 3D tree in one or two minutes, as easily as sketching it with paper and pen.
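    One simple piece of botanical knowledge often used to extend a 2D branch distribution to 3D is phyllotactic spacing: successive child branches are rotated around the parent axis by the golden angle (about 137.5 degrees), which spreads them evenly without overlap. This specific rule is an illustrative assumption, not a claim about the paper's exact model.

```python
import math

def branch_azimuths(n):
    """Azimuth angles (radians) for n child branches around their parent
    axis, spaced by the golden angle as in phyllotactic branching."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # ~2.39996 rad, ~137.5 deg
    return [(i * golden) % (2.0 * math.pi) for i in range(n)]
```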